11 research outputs found

    Permission-based Risk Signals for App Behaviour Characterization in Android Apps

    Get PDF
    With the parallel growth of the Android operating system and mobile malware, one of the ways to stay protected from mobile malware is by observing the permissions requested. However, without careful consideration of these permissions, users run the risk of installing a malicious app without any warning that might characterize its nature. We propose a permission-based risk signal using a taxonomy of sensitive permissions. Firstly, we analyse the risk of an app based on the permissions it requests, using a permission sensitivity index computed from a risky permission set. Secondly, we evaluate permission mismatch by checking what an app requires against what it requests. Thirdly, we apply security rules based on these metrics to evaluate the corresponding risks. We evaluate these factors using datasets of benign and malicious apps (43,580 apps), and our results demonstrate that the proposed framework can be used to improve risk signalling of Android apps with 95% accuracy.
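A minimal sketch of the kind of permission sensitivity index the abstract describes, assuming a hand-picked risky permission set and a simple ratio-based score; the actual permission set, weighting and threshold used in the paper are not specified here.

```python
# Illustrative permission sensitivity index: the fraction of an app's
# requested permissions that fall within an assumed "risky" permission set.

RISKY_PERMISSIONS = {  # illustrative subset only, not the paper's taxonomy
    "android.permission.READ_SMS",
    "android.permission.SEND_SMS",
    "android.permission.READ_CONTACTS",
    "android.permission.ACCESS_FINE_LOCATION",
    "android.permission.RECORD_AUDIO",
}

def permission_sensitivity_index(requested_permissions):
    """Return the share of requested permissions that are in the risky set."""
    requested = set(requested_permissions)
    if not requested:
        return 0.0
    return len(requested & RISKY_PERMISSIONS) / len(requested)

# Example: an app requesting two risky permissions out of four in total
app_permissions = [
    "android.permission.INTERNET",
    "android.permission.READ_SMS",
    "android.permission.ACCESS_FINE_LOCATION",
    "android.permission.VIBRATE",
]
print(permission_sensitivity_index(app_permissions))  # 0.5
```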

    Towards using unstructured user input request for malware detection

    Get PDF
    Privacy analysis techniques for mobile apps are mostly based on system-centric data originating from well-defined system API calls. But these apps may also collect sensitive information via unstructured input sources that elude privacy analysis. The consequence is that users are unable to determine the extent to which apps may impact their privacy when downloaded and installed on mobile devices. To this end, we present a privacy analysis framework for unstructured input. Our approach leverages app metadata descriptions and a taxonomy of sensitive information to identify sensitive unstructured user input. The outcome is an understanding of the level of user privacy risk posed by an app based on its unstructured user input requests. Subsequently, we evaluate the usefulness of unstructured sensitive user input for malware detection. We evaluate our methods using 175K benign apps and 175K malware APKs. The results highlight that a malicious-app detector built on unstructured sensitive user input achieves an average balanced accuracy of 0.996, demonstrated with Trojan-Banker and Trojan-SMS when the malware family and target applications are known, and a balanced accuracy of 0.70 with generic malware.
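A minimal sketch of how sensitive unstructured user input might be flagged by matching input-field hint text against a taxonomy of sensitive information; the taxonomy categories, terms and example hints below are illustrative assumptions, not the corpus used in the paper.

```python
# Flag UI input fields as sensitive when their hint text matches terms from
# an assumed taxonomy of sensitive information categories.

SENSITIVE_TAXONOMY = {
    "financial": ["card number", "cvv", "iban", "account number"],
    "identity":  ["passport", "national id", "ssn", "date of birth"],
    "health":    ["diagnosis", "medication", "blood type"],
    "contact":   ["phone number", "home address", "email"],
}

def classify_input_hint(hint_text):
    """Return the taxonomy categories whose terms appear in a field's hint text."""
    hint = hint_text.lower()
    return [category
            for category, terms in SENSITIVE_TAXONOMY.items()
            if any(term in hint for term in terms)]

# Example: hint strings extracted from an app's UI layouts
for hint in ["Enter your card number", "Nickname", "Phone number (optional)"]:
    print(hint, "->", classify_input_hint(hint) or ["not sensitive"])
```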

    Privacy analysis of mobile apps

    No full text
    The increasing popularity of the Android OS has seen its user base surge past 2.5 billion monthly active users, which has attracted cybercriminals and non-criminal actors to the OS because of the amount and quality of information they can access. As malicious apps are in an arms race with their benign counterparts and with malware analysis, and the Android ecosystem is continually evolving, it is important to continuously analyse the ecosystem for privacy and security issues. The thesis proposes a privacy and security analysis approach for mobile software systems. The research methodology abstracts the mobile security problem as an access control problem, where the behavioural elements mirror the standard elements of an access control system: identification, authentication and authorization. This involves analysing the app's behavioural elements for unstructured user input, user-granted permissions, UI textual descriptions, and literal app/product descriptions. Next, the effectiveness of the proposed approach was evaluated in the context of mobile systems security, particularly in the area of malware analysis and its mitigation. The approaches differ in the aspects of app metadata they utilize, so that security analysis of apps can be performed depending on which aspects of the app information are available. Overall, this thesis contributes to knowledge around mobile software systems for the design of robust malware detection tools, a security-oriented overview of mobile systems behaviour, and reliable risk signalling for privacy awareness. The findings demonstrate great promise in using the elements of access control for mobile systems in anomaly detection and sustainable malware mitigation, and the proposed approach succeeded in malware analysis where other approaches have not.

    Nation-state Threat Actor Attribution Using Fuzzy Hashing

    No full text

    Runtime and design time completeness checking of dangerous android app permissions against GDPR

    No full text
    Data and privacy laws, such as the GDPR, require mobile apps that collect and process the personal data of their citizens to have a legally compliant policy. Since these mobile apps are hosted on app distribution platforms such as Google Play Store and the App Store, the platforms also require app developers who wish to submit a new app or make changes to an existing app to be transparent about their privacy practices regarding the handling of sensitive user data that requires sensitive permissions such as calendar, camera and microphone. To verify compliance with privacy regulators and app distribution platforms, app privacy policies and permissions are investigated for consistency. However, little has been done to investigate GDPR completeness checking within the Android permission ecosystem. In this paper, we investigate design-time and runtime approaches towards completeness checking of sensitive ('dangerous') Android permission policy declarations against GDPR. Leveraging the MPP-270 annotated corpus that describes permission declarations in application privacy policies, six natural language processing and language modelling algorithms are developed to measure permission completeness at runtime, while a proof-of-concept Unified Modeling Language (UML) class diagram tool is developed to generate GDPR-compliant permission policy declarations from UML diagrams at design time. This paper contributes to the identification of appropriate permission policy declaration methodologies that a developer can use to target particular GDPR laws, increasing GDPR compliance by 12% in cases during runtime using BERT word embeddings, measuring GDPR compliance in permission policy sentences, and providing a UML-driven tool to generate compliant permission declarations.
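A minimal sketch of embedding-based completeness checking, assuming a sentence-embedding model and cosine similarity: each GDPR requirement statement is compared against an app's permission policy sentences and counted as covered if some sentence is sufficiently similar. The model name, threshold and example sentences are illustrative assumptions, not the paper's configuration or the MPP-270 corpus.

```python
# Compare assumed GDPR requirement statements against permission policy
# sentences using sentence embeddings and cosine similarity.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model choice

gdpr_requirements = [
    "State which categories of personal data are collected.",
    "State the purpose for which the microphone permission is used.",
    "State how long collected data is retained.",
]
policy_sentences = [
    "We access your microphone only to record voice notes you create.",
    "Voice notes are stored on your device and never uploaded.",
]

req_emb = model.encode(gdpr_requirements, convert_to_tensor=True)
pol_emb = model.encode(policy_sentences, convert_to_tensor=True)
similarity = util.cos_sim(req_emb, pol_emb)  # requirements x policy sentences

THRESHOLD = 0.5  # assumed cut-off for "covered"
for requirement, scores in zip(gdpr_requirements, similarity):
    covered = bool((scores > THRESHOLD).any())
    print("covered" if covered else "MISSING", "-", requirement)
```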

    Security-oriented view of app behaviour using textual descriptions and user-granted permission requests

    Get PDF
    One of the major Android security mechanisms for enforcing restrictions on the core facilities of a device that an app can access is permission control. However, granting permissions carries considerable risk, since 97% of mobile malware targets Android. As malware becomes more complicated, recent research has proposed a promising approach that checks implemented app behaviour against advertised app behaviour for inconsistencies. In this paper, we investigate such inconsistencies by matching the permissions an app requests with the natural language description of the app, which gives an intuitive idea of the app behaviour a user expects. We then propose exploiting an enhanced app description to improve malware detection based on app descriptions and permissions. To evaluate the performance, we carried out various experiments with 56K APKs. Our proposed enhancement reduces the false positives of the state-of-the-art approaches Whyper, AutoCog and CHABADA by at least 87%, and of TAPVerifier by at least 57%. We also propose a novel approach for evaluating the robustness of textual descriptions for permission-based malware detection. Our experimental results demonstrate a high detection recall of 98.72% on 71 up-to-date malware families and a precision of 90% on obfuscated samples of benign and malware APKs. Our results also show that analysing requested sensitive permissions and UI textual descriptions provides a promising avenue for sustainable Android malware detection.
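A minimal sketch of a description-to-permission consistency check of the kind the abstract describes, assuming a simple keyword association between sensitive permissions and description text; the keyword lists and example app are illustrative assumptions, not the paper's method or datasets.

```python
# Flag sensitive permissions that are requested but never hinted at in the
# app's natural language description.

PERMISSION_KEYWORDS = {  # assumed keyword associations, for illustration only
    "android.permission.ACCESS_FINE_LOCATION": ["location", "map", "nearby", "gps"],
    "android.permission.RECORD_AUDIO":         ["record", "voice", "audio", "microphone"],
    "android.permission.READ_CONTACTS":        ["contact", "friend", "invite"],
}

def description_permission_mismatches(description, requested_permissions):
    """Return requested sensitive permissions not reflected in the description."""
    text = description.lower()
    mismatches = []
    for perm in requested_permissions:
        keywords = PERMISSION_KEYWORDS.get(perm)
        if keywords and not any(word in text for word in keywords):
            mismatches.append(perm)
    return mismatches

description = "A simple flashlight app with adjustable brightness."
requested = ["android.permission.RECORD_AUDIO",
             "android.permission.ACCESS_FINE_LOCATION"]
print(description_permission_mismatches(description, requested))
# both permissions are flagged: the description never mentions audio or location
```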

    Voice spoofing detection for multiclass attack classification using deep learning

    No full text
    Voice biometric authentication is increasingly gaining adoption in organisations with high-volume identity verification and for providing access to physical and other virtual spaces. In this form of authentication, the user's identity is verified with their voice. However, these systems are susceptible to voice spoofing attacks, as malicious actors employ different types of attacks such as speech synthesis, voice conversion or imitation, and recorded replays to spoof the Automatic Speaker Verification (ASV) system or for spam communications. In this work, we provide a voice spoofing countermeasure both as a binary classification problem that distinguishes real from fake audio, and as a multiclass classification problem that detects voice conversion, synthesis and replay attacks. We investigated numerous audio features and examined each feature's capability alongside state-of-the-art deep learning algorithms, including convolutional neural networks (CNN), WaveNet, and the recurrent neural network variants Gated Recurrent Unit (GRU) and Long Short-Term Memory (LSTM). Using a large dataset of 419,426 audio files, we evaluated the deep learning models for their effectiveness against voice spoofing attacks. The binary-class CNN achieved a false positive rate (FPR) of 0.0216, while the multiclass solutions using CNN, WaveNet, LSTMs and GRUs achieved FPRs of 0.003, 0.0260, 0.0302 and 0.0358, respectively. We extended the evaluation of the models to real-time classification using microphone voice audio and user-uploaded audio to demonstrate the practical implications and deployability.
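A minimal sketch of a multiclass spoofing classifier, assuming a small CNN over fixed-size spectrogram-like features with four output classes (bona fide, replay, synthesis, voice conversion); the architecture, input shape and class set are illustrative assumptions, not the models evaluated in the paper.

```python
# Small CNN over spectrogram-shaped inputs with one output class per attack type.
import numpy as np
from tensorflow.keras import layers, models

NUM_CLASSES = 4             # assumed: bona fide, replay, synthesis, conversion
INPUT_SHAPE = (64, 128, 1)  # assumed: 64 mel bands x 128 frames x 1 channel

model = models.Sequential([
    layers.Input(shape=INPUT_SHAPE),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.GlobalAveragePooling2D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Train on random placeholder data just to show the expected tensor shapes.
x = np.random.rand(8, *INPUT_SHAPE).astype("float32")
y = np.random.randint(0, NUM_CLASSES, size=8)
model.fit(x, y, epochs=1, verbose=0)
print(model.predict(x[:1]).shape)  # (1, 4): one probability per attack class
```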